Double Doubly Robust Thompson Sampling for Generalized Linear Contextual Bandits


Abstract

We propose a novel algorithm for generalized linear contextual bandits (GLBs) with a regret bound sublinear in the time horizon, the minimum eigenvalue of the covariance of contexts, and a lower bound on the variance of rewards. In several identified cases, our result is the first regret bound sublinear in the dimension of contexts without discarding observed rewards. Previous approaches achieve this by discarding observed rewards, whereas ours achieves it by incorporating contexts from all arms into a double doubly robust (DDR) estimator. The DDR estimator is a subclass of doubly robust estimators but admits a tighter error bound. We also provide a logarithmic cumulative regret bound under a probabilistic margin condition. This condition covers linear models or GLMs when contexts differ across arms but the coefficients are common. We conduct empirical studies using synthetic data and real examples, demonstrating the effectiveness of our algorithm.
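The abstract's central idea, imputing pseudo-rewards for every arm so that the contexts of all arms (not only the chosen one) enter the Gram matrix, can be illustrated with a simplified doubly robust Thompson sampling loop. This is a minimal sketch, not the paper's DDR estimator: the linear reward model, Gaussian posterior, and uniform propensity 1/K are simplifying assumptions (the true selection probability under Thompson sampling is not uniform).

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, v = 5, 4, 2000, 0.5                   # context dim, arms, rounds, posterior scale
theta_true = rng.normal(size=d) / np.sqrt(d)   # hidden coefficient (illustration only)

B = np.eye(d)      # Gram matrix accumulating contexts of ALL arms
b = np.zeros(d)

for t in range(T):
    X = rng.normal(size=(K, d))                # one context per arm this round
    theta_hat = np.linalg.solve(B, b)          # ridge-style point estimate
    theta_tilde = rng.multivariate_normal(theta_hat, v**2 * np.linalg.inv(B))
    a = int(np.argmax(X @ theta_tilde))        # Thompson choice
    r = X[a] @ theta_true + 0.1 * rng.normal() # reward observed for the chosen arm only
    # doubly robust pseudo-rewards: impute every arm, then correct the chosen one
    pi = 1.0 / K                               # uniform propensity (simplifying assumption)
    pseudo = X @ theta_hat                     # imputed rewards for all arms
    pseudo[a] += (r - pseudo[a]) / pi
    B += X.T @ X                               # contexts of all arms enter the Gram matrix
    b += X.T @ pseudo
```

Because every round contributes all K contexts to B, the minimum eigenvalue of the Gram matrix grows at rate KT rather than depending on which arms happen to be pulled, which is the mechanism the abstract alludes to.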


Similar articles

Generalized Thompson Sampling for Contextual Bandits

Thompson Sampling, one of the oldest heuristics for solving multi-armed bandits, has recently been shown to demonstrate state-of-the-art performance. This empirical success has led to great interest in a theoretical understanding of the heuristic. In this paper, we approach the problem in a way very different from existing efforts. In particular, motivated by the connection between Thompson Sam...


Thompson Sampling for Contextual Bandits with Linear Payoffs

Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance compared to the state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we d...


Thompson Sampling for Contextual Bandits with Linear Payoffs

The following lemma is implied by Theorem 1 in Abbasi-Yadkori et al. (2011): Lemma 7 (Abbasi-Yadkori et al., 2011). Let $(\mathcal{F}'_t;\, t \ge 0)$ be a filtration, $(m_t;\, t \ge 1)$ be an $\mathbb{R}^d$-valued stochastic process such that $m_t$ is $\mathcal{F}'_{t-1}$-measurable, and $(\eta_t;\, t \ge 1)$ be a real-valued martingale difference process such that $\eta_t$ is $\mathcal{F}'_t$-measurable. For $t \ge 0$, define $\xi_t = \sum_{\tau=1}^{t} m_\tau \eta_\tau$ and $M_t = I_d + \sum_{\tau=1}^{t} m_\tau m_\tau^\top$, wh...
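The excerpt cuts off before the lemma's conclusion. For context, the conclusion of this standard self-normalized bound, as given (up to exact constants) by Theorem 1 of Abbasi-Yadkori et al. (2011) with regularizer $V = I_d$, under the additional assumption that $\eta_t$ is conditionally $R$-sub-Gaussian, reads roughly:

```latex
% Sketch of the truncated conclusion (assuming \eta_t is conditionally
% R-sub-Gaussian given \mathcal{F}'_{t-1}); constants follow
% Abbasi-Yadkori et al. (2011), Theorem 1, with V = I_d:
\[
\Pr\left[\,\forall t \ge 0:\;
  \|\xi_t\|_{M_t^{-1}}
  \;\le\; R\,\sqrt{2\,\log\!\frac{\det(M_t)^{1/2}}{\delta}}
\,\right] \;\ge\; 1 - \delta,
\qquad
\|\xi\|_{M^{-1}}^{2} := \xi^{\top} M^{-1} \xi .
\]
```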


Double Thompson Sampling for Dueling Bandits

In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandit problems. As its name suggests, D-TS selects both the first and the second candidates according to Thompson Sampling. Specifically, D-TS maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison according to two sets of samples independently drawn from the pos...
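The pair-selection rule described above can be sketched as a minimal simplification: sample Beta posteriors over the pairwise preference matrix twice, once for each candidate. The hypothetical preference matrix P, the uniform Beta(1, 1) priors, and the pairwise-win-count rule for the first candidate are illustrative assumptions; the confidence-interval pruning used by the actual D-TS algorithm is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 4, 3000
# hypothetical preference matrix: P[i, j] = Pr(arm i wins a duel against arm j)
P = np.array([[0.5, 0.6, 0.7, 0.8],
              [0.4, 0.5, 0.6, 0.7],
              [0.3, 0.4, 0.5, 0.6],
              [0.2, 0.3, 0.4, 0.5]])

wins = np.ones((K, K))     # Beta(1, 1) prior counts for each pairwise probability
losses = np.ones((K, K))
first_counts = np.zeros(K, dtype=int)

for t in range(T):
    # first candidate: one set of posterior samples; pick the arm whose
    # sampled row wins the most pairwise comparisons
    theta1 = rng.beta(wins, losses)
    first = int(np.argmax((theta1 > 0.5).sum(axis=1)))
    # second candidate: an independent set of samples, dueled against `first`
    theta2 = rng.beta(wins[:, first], losses[:, first])
    theta2[first] = 0.5          # as in D-TS, the self-comparison is fixed at 1/2
    second = int(np.argmax(theta2))
    # duel outcome drawn from the unknown preference matrix
    if rng.random() < P[first, second]:
        wins[first, second] += 1
        losses[second, first] += 1
    else:
        wins[second, first] += 1
        losses[first, second] += 1
    first_counts[first] += 1
```

With the gaps in P above, arm 0 is the Condorcet winner, so over enough rounds the first candidate should concentrate on it.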


Thompson Sampling for Budgeted Multi-Armed Bandits

Thompson sampling is one of the earliest randomized algorithms for multi-armed bandits (MAB). In this paper, we extend Thompson sampling to budgeted MAB, where there is a random cost for pulling an arm and the total cost is constrained by a budget. We start with the case of Bernoulli bandits, in which the random rewards (costs) of an arm are independently sampled from a Bernoulli distribution...
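For the Bernoulli case sketched above, a minimal budgeted Thompson sampling loop keeps Beta posteriors for both rewards and costs and pulls the arm with the best sampled reward-to-cost ratio. The means mu and c below are illustrative assumptions, and the ratio-based selection rule is one natural choice, not necessarily the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.2, 0.5, 0.8])   # hypothetical Bernoulli reward means
c = np.array([0.4, 0.5, 0.9])    # hypothetical Bernoulli cost means
K, budget = len(mu), 500.0

rs, rf = np.ones(K), np.ones(K)  # Beta posterior counts for rewards (success, failure)
cs, cf = np.ones(K), np.ones(K)  # Beta posterior counts for costs

total_reward = 0.0
spent = 0.0
while spent < budget:
    theta = rng.beta(rs, rf)              # sampled reward means
    gamma = rng.beta(cs, cf)              # sampled cost means
    a = int(np.argmax(theta / gamma))     # best sampled reward per unit cost
    r = float(rng.random() < mu[a])       # Bernoulli reward
    cost = float(rng.random() < c[a])     # Bernoulli cost
    total_reward += r
    spent += cost
    rs[a] += r
    rf[a] += 1.0 - r
    cs[a] += cost
    cf[a] += 1.0 - cost
```

Here the budget-optimal arm is the one maximizing mu[a] / c[a] (arm 1 with ratio 1.0, just ahead of arm 2 with 0.8 / 0.9), which is why the selection rule compares sampled ratios rather than sampled rewards alone.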



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i7.26001